Supercon: Designing Your Own Diffractive Optics | Hackaday


Kelly Peng is an electrical and optical engineer, and founder of Kura AR. She’s built a fusion reactor, a Raman spectrometer, a DIY structured light camera, a linear particle accelerator, and emotional classifiers for likes and dislikes. In short, we have someone who can do anything, and she came in to talk about one of the dark arts (pun obviously intended): optics.

The entire idea of Kura AR is to build an immersive augmented reality experience, and when it comes to AR glasses, there are two ways of doing it. You could go the Google Glass route and use a small OLED and lenses, but those displays aren't very bright. Alternatively, you could use a diffractive waveguide, like the HoloLens does. That's much harder to manufacture, but the payoff is a much larger field of view and a far more immersive experience.

The lens that Kelly is using in her AR headset is basically a diffraction grating: a series of closely spaced parallel lines on a piece of plastic. A grating redirects light at an angle that depends on the wavelength, so a full-color system needs three layers, one each for red, green, and blue. The trick is how to manufacture this. Kelly took a HoloLens lens apart and examined it with an electron microscope; the grating appears to be made with fancy, and expensive, photolithography.
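To get a feel for why each color needs its own grating layer, here's a minimal back-of-the-envelope sketch of the grating equation. The refractive index and grating pitch below are assumed illustration values, not Kura's or Microsoft's actual design numbers; it just shows that a single pitch sends red, green, and blue into very different angles, and only some of them stay trapped in the waveguide by total internal reflection.

```python
import math

# Assumed values for illustration only (not from the talk or any real design).
N_GLASS = 1.7      # refractive index of the waveguide substrate
PITCH_NM = 450.0   # grating period d, in nanometres
ORDER = 1          # first diffraction order

# Light stays guided only if it diffracts beyond the critical angle for TIR.
critical = math.degrees(math.asin(1.0 / N_GLASS))

for name, wavelength_nm in [("blue", 460.0), ("green", 530.0), ("red", 630.0)]:
    # Grating equation at normal incidence: n * sin(theta) = m * lambda / d
    s = ORDER * wavelength_nm / (N_GLASS * PITCH_NM)
    if s >= 1.0:
        print(f"{name:5s} {wavelength_nm:.0f} nm: not diffracted at this pitch")
        continue
    theta = math.degrees(math.asin(s))
    guided = "guided (TIR)" if theta > critical else "leaks out"
    print(f"{name:5s} {wavelength_nm:.0f} nm: {theta:4.1f} deg "
          f"(critical {critical:.1f} deg) -> {guided}")
```

With these assumed numbers, blue barely clears the critical angle while red diffracts much more steeply, which is why a practical full-color waveguide uses a separate grating tuned to each wavelength.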

There is another way, though. The feature sizes on this diffraction grating aren't too small, so it could conceivably be done through injection molding. With a lot of coding, simulation, and testing, Kelly realized this was manufacturable with a fairly standard injection molding process: it would cost only about $60,000 upfront and produce each part for about one dollar. That's much better than whatever process is going into the HoloLens, and an impressive technical feat that brings the future of AR closer than ever before.
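The economics are easy to sanity-check with the two figures quoted in the talk: roughly $60,000 of tooling and about $1 per molded part. The production volumes below are arbitrary illustration points, not Kura's projections.

```python
# Amortize the quoted tooling cost over a few hypothetical volumes.
TOOLING_USD = 60_000   # upfront mold cost quoted in the talk
PER_PART_USD = 1.0     # per-part cost quoted in the talk

for volume in (1_000, 10_000, 100_000, 1_000_000):
    per_lens = PER_PART_USD + TOOLING_USD / volume
    print(f"{volume:>9,} lenses -> ${per_lens:,.2f} each, tooling included")
```

At ten thousand lenses the tooling already dwarfs the per-part cost at $7 each; at a million lenses it effectively disappears.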

This talk gets deep into diffractive optics. It’s jam-packed with the kind of technical detail you’ll need to know if you’re going to hack together your own AR / VR system. In short, it’s the kind of real-world technical talk that we love. Sit back with some popcorn and your notepad.

Wow, amazing how she researched the manufacturing process to make it more affordable for the end user while using common processes in a new way. Teaching the old dog a new trick that will hopefully kick this technology into the mainstream!

Microsoft should be hiring her as a consultant, if not writing her a check for the millions they’ll save.

Unfortunately, due to patents it would be impossible to bring this to market without a big player like Microsoft or Google on board.

wow, this is PhD level stuff! Impressive.

But not a PhD in optical engineering. She's just throwing around the buzzwords that are common in the scene and in the patents. There is nothing new in what she describes. All the companies in this field have had the same thoughts. Injection-molded diffractive optics? Not new at all. I was disappointed. I don't know what she brings other than ripping off existing patents with dodgy cheap manufacturing.

thanks for letting me know. I’m trying to be less cynical, but it’s good to know when the inner cynic was right after all.

“Kelly Peng is an electrical and optical engineer, and founder of Kura AR. She’s built a fusion reactor, a Raman spectrometer, a DIY structured light camera, a linear particle accelerator, and emotional classifiers for likes and dislikes.”

I love that you recorded and uploaded these videos, and I can't even imagine how much work went into doing it all live. But given how many slides are missed entirely in many of these videos because the camera was showing a close-up of the speaker instead, I think skipping the in-room camera altogether and just recording slides + audio would have resulted in better videos (if a better mix of speaker + slides is unobtainable).

But moaning about the medium aside, great talk! I learned a ton, even if the majority of it was over my head.

I don’t mind it. In my opinion slide/speaker exposure ratio is perfect ;)

I think we must have the slides somewhere… I don’t know if we have permission to post them up, but I can ask! That would be cool, in general, to put up with the talks, no?

If you have both the slides and the close-up video, then I would do something like picture-in-picture, where the close-up video sits in the bottom right-hand corner of the screen except when it would obscure the slides.

I’ll just kick in and say that every decision is a trade-off when you don’t have multiple cameras. If you record the audio + slides, you end up missing the speaker’s gestures and body language. Also, big changes between slides can result in over/underexposure you don’t even notice until later.

Personally, I prefer to record the speaker and obtain a copy of the presentation, then inset the speaker alongside the corresponding slide. Of course, if they are using a laser pointer or rapidly switching between slides, you miss information.

Naw man you gotta record it in 360 stereo so you can don an HMD and feel like you’re there! :)

One of the best methods I’ve seen is using 4:3 slides and having the speaker centred in the extra space of the 16:9 frame. It can be tricky when doing live video, to be sure!

Tell me more about this fusion reactor she built. How did she obtain the raw materials? Is it still in operation?

Probably a spherical inertial electrostatic confinement device: https://en.wikipedia.org/wiki/Inertial_electrostatic_confinement. Not state-of-the-art science, but not trivial either; kudos if she pulled it off.


